Mental illness


Health experts call for AI addiction to be classed as a mental illness - as sufferers report feeling suicidal when separated from their favourite chatbot

Daily Mail - Science & tech

Health experts are calling for AI chatbot addiction to be recognised as a mental illness, as the number of supposed cases climbs.
On online forums, more and more teenagers and young adults are now saying they feel 'addicted' to their AI companions and struggle to kick the habit. These young users spend hours every day roleplaying complex fantasies, venting their frustrations, and seeking emotional connection with digital companions.


Psychiatry has finally found an objective way to spot mental illness

New Scientist

"It seems like this past week has been quite challenging for you," a disembodied voice tells me, before proceeding to ask a series of increasingly personal questions. "Have you been feeling down or depressed?" "Can you describe what this feeling has been like for you?" "Does the feeling lift at all when something good happens?" When I respond to each one, my chatbot interviewer thanks me for my honesty and empathises with any issues. By the end of the conversation, I will have also spoken about my sleep patterns, sex drive and appetite for food.


For the First Time, Mutations in a Single Gene Have Been Linked to Mental Illness

WIRED

Research links variations in the gene GRIN2A to a higher risk of developing schizophrenia and other forms of mental illness. A team of physicians specializing in genetics and neurology discovered the link, and the scientists maintain that identifying this genetic risk factor opens up the possibility of designing preventive therapies in the future. The GRIN2A gene regulates communication between neurons by producing the GluN2A protein. When functioning optimally, it promotes the transmission of electrical signals between nerve cells and facilitates essential processes such as learning, memory, language, and brain development.


Evolution of intelligence in our ancestors may have come at a cost

New Scientist

A timeline of genetic changes in millions of years of human evolution shows that variants linked to higher intelligence appeared most rapidly around 500,000 years ago, and were closely followed by mutations that made us more prone to mental illness. The findings suggest a "trade-off" in brain evolution between intelligence and psychiatric issues, says Ilan Libedinsky at the Center for Neurogenomics and Cognitive Research in Amsterdam, the Netherlands. Why did humans evolve big brains? "Mutations related to psychiatric disorders apparently involve part of the genome that also involves intelligence. So there's an overlap there," says Libedinsky. "[The advances in cognition] may have come at the price of making our brains more vulnerable to mental disorders."


A Startup Used AI to Make a Psychedelic Without the Trip

WIRED

Mindstate Design Labs, backed by Silicon Valley power players, has created what its CEO calls "the least psychedelic psychedelic that's psychoactive." While there's growing evidence that psychedelic drugs can effectively treat severe mental health conditions, especially in cases where traditional treatments have failed, they still come with downsides. Their hallucinogenic effects can be scary and overwhelming, with dosing sessions lasting several hours. Good treatment is heavily reliant on the individual's mindset going into a session and the environment in which they receive it. And though it's rare, psychedelics can sometimes worsen existing mental illness.


Diagnosing Psychiatric Patients: Can Large Language and Machine Learning Models Perform Effectively in Emergency Cases?

Ahammed, Abu Shad, Mukherjee, Sayeri, Obermaisser, Roman

arXiv.org Artificial Intelligence

Mental disorders are clinically significant patterns of behavior that are associated with stress and/or impairment in social, occupational, or family activities. People suffering from such disorders are often misjudged and poorly diagnosed due to a lack of visible symptoms compared to other health complications. Identifying psychiatric issues during emergency situations is therefore challenging, yet essential for saving patients. In this paper, we have conducted research on how traditional machine learning and large language models (LLMs) can assess these psychiatric patients based on their behavioral patterns to provide a diagnostic assessment. Data from emergency psychiatric patients were collected from a rescue station in Germany. Various machine learning models, including Llama 3.1, were used with rescue patient data to assess whether the models' predictive capabilities can serve as an efficient tool for identifying patients with mental disorders, especially in rescue cases.
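The abstract does not specify the models' internals, but the core idea of mapping behavioural patterns to a diagnostic label can be illustrated with a minimal sketch. Everything here is invented for illustration: the feature names, the scores, the label set, and the toy nearest-neighbour classifier standing in for the models the paper actually evaluates.

```python
from collections import Counter
import math

# Hypothetical behavioural features per rescue case:
# (agitation, speech_coherence, orientation), each scored 0-10.
TRAIN = [
    ((9, 2, 3), "acute_psychosis"),
    ((8, 3, 2), "acute_psychosis"),
    ((3, 8, 9), "anxiety_episode"),
    ((2, 9, 8), "anxiety_episode"),
    ((6, 5, 4), "substance_related"),
    ((7, 4, 5), "substance_related"),
]

def classify(features, k=3):
    """Toy k-nearest-neighbour assessment: vote among the k
    training cases closest in feature space."""
    dists = sorted(
        (math.dist(features, x), label) for x, label in TRAIN
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(classify((8, 2, 3)))
```

A real pipeline would replace the hand-scored tuples with clinically collected features (or free-text descriptions fed to an LLM such as Llama 3.1), but the structure — features in, diagnostic label out — is the same.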


Revealed: The 32 terrifying ways AI could go rogue - from hallucinations to paranoid delusions

Daily Mail - Science & tech

It might sound like a scenario from the most far-fetched of science fiction novels. But scientists have revealed the 32 terrifyingly real ways that AI systems could go rogue. Researchers warn that sufficiently advanced AI might start to develop 'behavioural abnormalities' which mirror human psychopathologies. From relatively harmless 'Existential Anxiety' to the potentially catastrophic 'Übermenschal Ascendancy', any of these machine mental illnesses could lead to AI escaping human control. As AI systems become more complex and gain the ability to reflect on themselves, scientists are concerned that their errors may go far beyond simple computer bugs.


What is Stigma Attributed to? A Theory-Grounded, Expert-Annotated Interview Corpus for Demystifying Mental-Health Stigma

Meng, Han, Chen, Yancan, Li, Yunan, Yang, Yitian, Lee, Jungup, Zhang, Renwen, Lee, Yi-Chieh

arXiv.org Artificial Intelligence

Mental-health stigma remains a pervasive social problem that hampers treatment-seeking and recovery. Existing resources for training neural models to finely classify such stigma are limited, relying primarily on social-media or synthetic data without theoretical underpinnings. To remedy this gap, we present an expert-annotated, theory-informed corpus of human-chatbot interviews, comprising 4,141 snippets from 684 participants with documented socio-cultural backgrounds. Our experiments benchmark state-of-the-art neural models and empirically unpack the challenges of stigma detection. This dataset can facilitate research on computationally detecting, neutralizing, and counteracting mental-health stigma. Our corpus is openly available at https://github.com/HanMeng2004/Mental-Health-Stigma-Interview-Corpus.


Deconstructing Depression Stigma: Integrating AI-driven Data Collection and Analysis with Causal Knowledge Graphs

Meng, Han, Zhang, Renwen, Wang, Ganyi, Yang, Yitian, Qin, Peinuan, Lee, Jungup, Lee, Yi-Chieh

arXiv.org Artificial Intelligence

Mental-illness stigma is a persistent social problem, hampering both treatment-seeking and recovery. Accordingly, there is a pressing need to understand it more clearly, but analyzing the relevant data is highly labor-intensive. Therefore, we designed a chatbot to engage participants in conversations; coded those conversations qualitatively with AI assistance; and, based on those coding results, built causal knowledge graphs to decode stigma. The results we obtained from 1,002 participants demonstrate that conversation with our chatbot can elicit rich information about people's attitudes toward depression, while our AI-assisted coding was strongly consistent with human-expert coding. Our novel approach combining large language models (LLMs) and causal knowledge graphs uncovered patterns in individual responses and illustrated the interrelationships of psychological constructs in the dataset as a whole. The paper also discusses these findings' implications for HCI researchers in developing digital interventions, decomposing human psychological constructs, and fostering inclusive attitudes.
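The causal-knowledge-graph idea above can be sketched very simply: coded psychological constructs become nodes, and directed edges record the causal links the coding uncovers. The construct names and edges below are invented for illustration; the paper derives its graph from AI-assisted qualitative coding of 1,002 participants' conversations.

```python
# Toy causal knowledge graph: edges link hypothetical coded constructs.
EDGES = [
    ("attribution: personal weakness", "emotion: blame"),
    ("emotion: blame", "behaviour: social distancing"),
    ("attribution: biomedical cause", "emotion: sympathy"),
    ("emotion: sympathy", "behaviour: support"),
]

def downstream(node, edges=EDGES):
    """All constructs causally reachable from `node` (simple DFS),
    i.e. the attitudes and behaviours an attribution feeds into."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(downstream("attribution: personal weakness"))
```

Traversals like this are what let the authors read off patterns such as "attributing depression to personal weakness feeds blame, which feeds social distancing" from the dataset as a whole.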


A Comprehensive Evaluation of Large Language Models on Mental Illnesses in Arabic Context

Zahran, Noureldin, Fouda, Aya E., Hanafy, Radwa J., Fouda, Mohammed E.

arXiv.org Artificial Intelligence

Mental health disorders pose a growing public health concern in the Arab world, emphasizing the need for accessible diagnostic and intervention tools. Large language models (LLMs) offer a promising approach, but their application in Arabic contexts faces challenges including limited labeled datasets, linguistic complexity, and translation biases. This study comprehensively evaluates 8 LLMs, including general multi-lingual models, as well as bi-lingual ones, on diverse mental health datasets (such as AraDepSu, Dreaddit, MedMCQA), investigating the impact of prompt design, language configuration (native Arabic vs. translated English, and vice versa), and few-shot prompting on diagnostic performance. We find that prompt engineering significantly influences LLM scores mainly due to reduced instruction following, with our structured prompt outperforming a less structured variant on multi-class datasets, with an average difference of 14.5%. While language influence on performance was modest, model selection proved crucial: Phi-3.5 MoE excelled in balanced accuracy, particularly for binary classification, while Mistral NeMo showed superior performance in mean absolute error for severity prediction tasks. Few-shot prompting consistently improved performance, with particularly substantial gains observed for GPT-4o Mini on multi-class classification, boosting accuracy by an average factor of 1.58. These findings underscore the importance of prompt optimization, multilingual analysis, and few-shot learning for developing culturally sensitive and effective LLM-based mental health tools for Arabic-speaking populations.
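The structured, few-shot prompting the study evaluates can be sketched as follows. The example posts, label set, and exact prompt wording here are invented; the real experiments used datasets such as AraDepSu and Dreaddit, in both Arabic and translated-English configurations.

```python
# Minimal sketch of structured few-shot prompt construction.
FEW_SHOT = [
    ("I can't sleep and everything feels hopeless.", "depression"),
    ("Work deadlines keep me on edge all day.", "stress"),
]

def build_prompt(post, examples=FEW_SHOT):
    """Structured prompt: one instruction, labelled examples
    in a fixed format, then the query post with an open label."""
    lines = ["Classify the post into one label: depression, stress, none."]
    for text, label in examples:
        lines.append(f"Post: {text}\nLabel: {label}")
    lines.append(f"Post: {post}\nLabel:")
    return "\n\n".join(lines)

print(build_prompt("I feel empty and worthless lately."))
```

The study's finding that structured prompts beat less structured variants by 14.5% on average suggests the fixed `Post:`/`Label:` scaffolding matters mainly because it keeps the model following instructions, which is why the few-shot examples repeat the exact output format expected from the model.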